Discussion of "Frequentist coverage of adaptive nonparametric Bayesian credible sets"
Discussion of "Frequentist coverage of adaptive nonparametric Bayesian
credible sets" by Szabó, van der Vaart and van Zanten [arXiv:1310.4489v5].
Comment: Published at http://dx.doi.org/10.1214/15-AOS1270D in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
Adaptive confidence balls
Adaptive confidence balls are constructed for individual resolution levels as
well as the entire mean vector in a multiresolution framework. Finite sample
lower bounds are given for the minimum expected squared radius for confidence
balls with a prespecified confidence level. The confidence balls are centered
on adaptive estimators based on special local block thresholding rules. The
radius is derived from an analysis of the loss of this adaptive estimator. In
addition, adaptive honest confidence balls are constructed which have
guaranteed coverage probability over all of $\mathbb{R}^N$ and expected
squared radius adapting over a maximum range of Besov bodies.
Comment: Published at http://dx.doi.org/10.1214/009053606000000146 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
Nonparametric estimation over shrinking neighborhoods: Superefficiency and adaptation
A theory of superefficiency and adaptation is developed under flexible
performance measures which give a multiresolution view of risk and bridge the
gap between pointwise and global estimation. This theory provides a useful
benchmark for the evaluation of spatially adaptive estimators and shows that
the possible degree of superefficiency for minimax rate optimal estimators
critically depends on the size of the neighborhood over which the risk is
measured. Wavelet procedures are given which adapt rate optimally for given
shrinking neighborhoods including the extreme cases of mean squared error at a
point and mean integrated squared error over the whole interval. These adaptive
procedures are based on a new wavelet block thresholding scheme which combines
both the commonly used horizontal blocking of wavelet coefficients (at the same
resolution level) and vertical blocking of coefficients (across different
resolution levels).
Comment: Published at http://dx.doi.org/10.1214/009053604000000832 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
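The block thresholding idea recurring in the abstracts above can be illustrated with a minimal blockwise James–Stein rule applied to one resolution level of wavelet coefficients. This is a sketch only: the `block_size` and the constant `lam` (the value 4.505 comes from Cai's BlockJS rule) are assumptions for illustration, not the tuned choices of the papers listed here.

```python
import numpy as np

def block_threshold(coeffs, sigma=1.0, block_size=4, lam=4.505):
    """Blockwise James-Stein thresholding of one resolution level
    of wavelet coefficients (illustrative sketch; lam = 4.505 is
    the constant from Cai's BlockJS rule, assumed here, not taken
    from the papers above)."""
    out = np.zeros_like(coeffs, dtype=float)
    n = len(coeffs)
    for start in range(0, n, block_size):
        block = coeffs[start:start + block_size]
        s2 = np.sum(block ** 2)                 # block energy
        thresh = lam * len(block) * sigma ** 2  # blockwise threshold
        # shrink the whole block toward zero; kill it entirely
        # when its energy falls below the threshold
        shrink = max(0.0, 1.0 - thresh / s2) if s2 > 0 else 0.0
        out[start:start + block_size] = shrink * block
    return out
```

Treating whole blocks rather than individual coefficients is what lets such rules borrow strength from neighbors at the same resolution level ("horizontal" blocking); the vertical scheme described above additionally groups coefficients across levels.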
A complement to Le Cam's theorem
This paper examines asymptotic equivalence in the sense of Le Cam between
density estimation experiments and the accompanying Poisson experiments. The
significance of asymptotic equivalence is that all asymptotically optimal
statistical procedures can be carried over from one experiment to the other.
The equivalence given here is established under a weak assumption on the
parameter space $\mathcal{F}$. In particular, a sharp Besov smoothness
condition is given on $\mathcal{F}$ which is sufficient for Poissonization,
namely, if $\mathcal{F}$ is in a Besov ball $B^{\alpha}_{p,q}(M)$ with
$\alpha p > 1/2$. Examples show Poissonization is not possible whenever
$\alpha p < 1/2$. In addition, asymptotic equivalence of the density
estimation model and the accompanying Poisson experiment is established for
all compact subsets of $C([0,1])$, a condition which includes all Hölder
balls with smoothness $\alpha > 0$.
Comment: Published at http://dx.doi.org/10.1214/009053607000000091 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
On adaptive estimation of linear functionals
Adaptive estimation of linear functionals over a collection of parameter
spaces is considered. A between-class modulus of continuity, a geometric
quantity, is shown to be instrumental in characterizing the degree of
adaptability over two parameter spaces in the same way that the usual modulus
of continuity captures the minimax difficulty of estimation over a single
parameter space. A general construction of optimally adaptive estimators based
on an ordered modulus of continuity is given. The results are complemented by
several illustrative examples.
Comment: Published at http://dx.doi.org/10.1214/009053605000000633 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
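For concreteness, the ordered modulus of continuity central to this line of work takes the following general form (the notation is a sketch in the Donoho–Low–Cai tradition; the paper's exact norm and normalization may differ):

```latex
% Ordered modulus of continuity of a linear functional T between
% classes F_1 and F_2 (notation assumed, not quoted from the paper):
\omega(\varepsilon;\mathcal{F}_1,\mathcal{F}_2)
  \;=\; \sup\bigl\{\, Tf_2 - Tf_1 \;:\;
        \|f_2 - f_1\| \le \varepsilon,\;
        f_1\in\mathcal{F}_1,\ f_2\in\mathcal{F}_2 \,\bigr\}.
% Taking F_1 = F_2 = F recovers the usual single-class modulus
% that governs minimax estimation of T over F.
```

Swapping $\mathcal{F}_1$ and $\mathcal{F}_2$ generally changes the value, which is what "ordered" records; the between-class modulus accounts for both orderings.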
Nonquadratic estimators of a quadratic functional
Estimation of a quadratic functional over parameter spaces that are not
quadratically convex is considered. It is shown, in contrast to the theory for
quadratically convex parameter spaces, that optimal quadratic rules are often
rate suboptimal. In such cases minimax rate optimal procedures are constructed
based on local thresholding. These nonquadratic procedures are sometimes fully
efficient even when optimal quadratic rules have slow rates of convergence.
Moreover, it is shown that when estimating a quadratic functional nonquadratic
procedures may exhibit different elbow phenomena than quadratic procedures.
Comment: Published at http://dx.doi.org/10.1214/009053605000000147 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
Optimal adaptive estimation of a quadratic functional
Adaptive estimation of a quadratic functional over both Besov and $\ell_p$
balls is considered. A collection of nonquadratic estimators is developed
which have useful bias and variance properties over individual Besov and
$\ell_p$ balls. An adaptive procedure is then constructed based on penalized
maximization over this collection of nonquadratic estimators. This procedure
is shown to be optimally rate adaptive over the entire range of Besov and
$\ell_p$ balls in the sense that it attains certain constrained risk bounds.
Comment: Published at http://dx.doi.org/10.1214/009053606000000849 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
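The quadratic-versus-nonquadratic distinction in the two abstracts above can be illustrated in the Gaussian sequence model $y_i = \theta_i + \sigma z_i$: the classical unbiased estimator of $\sum_i \theta_i^2$ is quadratic in the data, while a thresholded variant is not. This is a generic illustration only; the threshold choice below is the universal threshold, an assumption, and neither function is the papers' exact procedure.

```python
import numpy as np

def quad_unbiased(y, sigma=1.0):
    """Classical quadratic (unbiased) estimate of sum(theta_i^2)
    in the sequence model y_i = theta_i + sigma * z_i."""
    return float(np.sum(y ** 2 - sigma ** 2))

def quad_thresholded(y, sigma=1.0, lam=None):
    """Nonquadratic variant: only coordinates surviving a threshold
    contribute (illustrative; not the papers' exact rule)."""
    n = len(y)
    if lam is None:
        lam = np.sqrt(2 * np.log(n))  # universal threshold (assumption)
    keep = np.abs(y) > lam * sigma
    return float(np.sum(y[keep] ** 2 - sigma ** 2))
```

On sparse signals the thresholded estimator avoids accumulating the $-\sigma^2$ noise corrections from the many near-zero coordinates, which is the intuition behind local thresholding beating quadratic rules over non-quadratically-convex spaces such as $\ell_p$ balls with small $p$.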
An adaptation theory for nonparametric confidence intervals
A nonparametric adaptation theory is developed for the construction of
confidence intervals for linear functionals. A between-class modulus of
continuity captures the expected length of adaptive confidence intervals. Sharp
lower bounds are given for the expected length and an ordered modulus of
continuity is used to construct adaptive confidence procedures which are within
a constant factor of the lower bounds. In addition, minimax theory over
nonconvex parameter spaces is developed.
Comment: Published at http://dx.doi.org/10.1214/009053604000000049 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
Adaptive confidence intervals for regression functions under shape constraints
Adaptive confidence intervals for regression functions are constructed under
shape constraints of monotonicity and convexity. A natural benchmark is
established for the minimum expected length of confidence intervals at a given
function in terms of an analytic quantity, the local modulus of continuity.
This bound depends not only on the function but also on the assumed function
class. These benchmarks show that the constructed confidence intervals have
near minimum expected length for each individual function, while maintaining a
given coverage probability for functions within the class. Such adaptivity is
much stronger than adaptive minimaxity over a collection of large parameter
spaces.
Comment: Published at http://dx.doi.org/10.1214/12-AOS1068 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
Estimation and confidence sets for sparse normal mixtures
For high dimensional statistical models, researchers have begun to focus on
situations which can be described as having relatively few moderately large
coefficients. Such situations lead to some very subtle statistical problems. In
particular, Ingster and Donoho and Jin have considered a sparse normal means
testing problem, in which they described the precise demarcation or detection
boundary. Meinshausen and Rice have shown that it is even possible to estimate
consistently the fraction of nonzero coordinates on a subset of the detectable
region, but leave unanswered the question of exactly in which parts of the
detectable region consistent estimation is possible. In the present paper we
develop a new approach for estimating the fraction of nonzero means for
problems where the nonzero means are moderately large. We show that the
detection region described by Ingster and Donoho and Jin turns out to be the
region where it is possible to consistently estimate the expected fraction of
nonzero coordinates. This theory is developed further and minimax rates of
convergence are derived. A procedure is constructed which attains the optimal
rate of convergence in this setting. Furthermore, the procedure also provides
an honest lower bound for confidence intervals while minimizing the expected
length of such an interval. Simulations are used to enable comparison with the
work of Meinshausen and Rice, where a procedure is given but where rates of
convergence have not been discussed. Extensions to more general Gaussian
mixture models are also given.
Comment: Published at http://dx.doi.org/10.1214/009053607000000334 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
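The sparse normal means setting above can be made concrete with a naive exceedance-count estimate of the nonzero fraction, in the spirit of the Meinshausen–Rice bounding approach: compare how many observations exceed a threshold with the count expected under pure noise. This is an illustration only, not the paper's rate-optimal procedure, and the threshold `t` is an assumption.

```python
import numpy as np
from math import erf, sqrt

def naive_nonzero_fraction(x, t=2.0):
    """Naive lower-bound-style estimate of the fraction of nonzero
    means in x_i ~ (1 - eps) N(0,1) + eps N(mu_i, 1): excess of the
    observed exceedance rate over |x| > t relative to its pure-noise
    expectation (illustration only; not the paper's optimal rule)."""
    # P(|Z| > t) for standard normal Z, via the error function
    p_null = 1 - erf(t / sqrt(2))
    observed = np.mean(np.abs(x) > t)
    return max(0.0, observed - p_null)
```

Because signal observations need not all exceed `t`, this estimate is biased downward; the point of the minimax theory summarized above is to characterize exactly when and how fast the true fraction can be recovered.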